11 research outputs found

    Video Quality Metrics


    Towards the Effects of Alignment Edits on the Quality of Experience of 360 Videos

    The optimization of viewers' quality of experience (QoE) in 360° videos faces two major roadblocks: inaccurate adaptive streaming and viewers missing the plot of a story. Alignment edits have emerged as a promising mechanism to avoid both issues at once. An alignment edit acts on the content, matching the user's viewport with a region of interest in the video. As a consequence, viewers' attention is focused, reducing exploratory behavior and enabling the optimization of network resources; in addition, it allows for a precise selection of the events shown to viewers, helping them follow the storyline. In this work, we investigate the effects of alignment edits on QoE by conducting two user studies. Specifically, we measured three QoE factors: presence, comfort, and overall QoE. We introduce a new alignment edit, named Fade-rotation, based on a mechanism used to reduce cybersickness in VR games. In the user studies, we tested four versions of Fade-rotation and compared them with instant alignment. The results show that gradual alignment achieves good levels of comfort for all contents and rotational speeds tested, demonstrating its validity. We observed a decrease in head motion after both alignment edits, with the gradual edit reducing head speed 8% more than instant alignment, confirming the usefulness of these edits for on-demand video streaming. Finally, we describe the parameters needed to implement Fade-rotation.
    Comment: 14 pages, 13 figures, 4 tables
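    The gradual alignment idea described above — rotating the scene so the viewport drifts toward a region of interest at a bounded angular speed — can be sketched as follows. This is a minimal illustration, not the paper's implementation; all function names, the speed cap, and the frame-rate parameter are assumptions, and the fade component of Fade-rotation is omitted.

    ```python
    def gradual_alignment_step(viewport_yaw, roi_yaw, max_speed_deg, dt):
        """One frame of a gradual alignment edit (illustrative, not the
        paper's method): move the viewport yaw toward the region of
        interest, capping angular speed to limit cybersickness.
        Angles are in degrees."""
        # Shortest signed angular difference, mapped into (-180, 180]
        diff = (roi_yaw - viewport_yaw + 180.0) % 360.0 - 180.0
        # Clamp the rotation applied this frame to max_speed_deg * dt
        limit = max_speed_deg * dt
        step = max(-limit, min(limit, diff))
        return viewport_yaw + step

    def align(viewport_yaw, roi_yaw, max_speed_deg=30.0, fps=60):
        """Run alignment steps until the viewport reaches the ROI
        (or a 10-second budget is exhausted)."""
        dt = 1.0 / fps
        yaw = viewport_yaw
        for _ in range(10 * fps):
            if abs((roi_yaw - yaw + 180.0) % 360.0 - 180.0) < 1e-6:
                break
            yaw = gradual_alignment_step(yaw, roi_yaw, max_speed_deg, dt)
        return yaw
    ```

    With the defaults above, a 90° misalignment closes in about three seconds at 0.5° per frame; an instant alignment edit would correspond to removing the speed cap entirely.
    
    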

    Referenceless image quality assessment by saliency, color-texture energy, and gradient boosting machines

    No full text
    Abstract In most practical multimedia applications, processes are used to manipulate the image content. These processes include compression, transmission, or restoration techniques, which often create distortions that may be visible to human subjects. The design of algorithms that can estimate the visual similarity between a distorted image and its non-distorted version, as perceived by a human viewer, can lead to significant improvements in these processes. Therefore, over the last decades, researchers have been developing quality metrics (i.e., algorithms) that estimate the quality of images in multimedia applications. These metrics can make use of either the full pristine content (full-reference metrics) or only of the distorted image (referenceless metrics). This paper introduces a novel referenceless image quality assessment (RIQA) metric, which provides significant improvements when compared to other state-of-the-art methods. The proposed method combines statistics of the opposite color local variance pattern (OC-LVP) descriptor with statistics of the opposite color local salient pattern (OC-LSP) descriptor. Both the OC-LVP and OC-LSP descriptors, which are proposed in this paper, are extensions of the opposite color local binary pattern (OC-LBP) operator. Statistics of these operators generate features that are mapped into subjective quality scores using a machine-learning approach. Specifically, to fit a predictive model, the features are used as input to a gradient boosting machine (GBM). Results show that the proposed method is robust and accurate, outperforming other state-of-the-art RIQA methods.
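    The descriptor-plus-regressor pipeline this abstract describes can be illustrated with a plain local binary pattern (LBP) histogram, the operator the paper's OC-LBP/OC-LSP/OC-LVP descriptors extend. This is a simplified single-channel sketch, not the paper's opponent-color method; the function name and normalization choice are assumptions.

    ```python
    import numpy as np

    def lbp_histogram(gray):
        """Normalized 8-neighbor local binary pattern histogram of a
        grayscale image (a simplified stand-in for the opponent-color
        descriptors in the paper)."""
        g = np.asarray(gray, dtype=np.int32)
        c = g[1:-1, 1:-1]  # interior pixels (centers)
        # 8 neighbor offsets, clockwise from the top-left
        shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
        code = np.zeros_like(c)
        for bit, (dy, dx) in enumerate(shifts):
            nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
            # Set bit where the neighbor is >= the center pixel
            code |= (nb >= c).astype(np.int32) << bit
        hist = np.bincount(code.ravel(), minlength=256).astype(float)
        return hist / hist.sum()  # 256-bin feature vector
    ```

    In a full RIQA pipeline, histograms like this (computed per opponent-color channel pair and concatenated) become the feature vector, and a gradient boosting regressor — for example `sklearn.ensemble.GradientBoostingRegressor` — is trained to map features to subjective quality scores.
    
    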

    A content-based viewport prediction model

    No full text
    Faculdade de Tecnologia (FT), Departamento de Engenharia Civil e Ambiental (FT ENC)